Real-time face recognition on ARM platform based on deep learning
FANG Guokang, LI Jun, WANG Yaoru
Journal of Computer Applications    2019, 39 (8): 2217-2222.   DOI: 10.11772/j.issn.1001-9081.2019010164
Aiming at the problems of poor real-time performance and low recognition rate of face recognition on the ARM platform, a real-time face recognition method based on deep learning was proposed. Firstly, an algorithm for detecting and tracking faces in real time was designed based on the MTCNN face detection algorithm. Then, a face feature extraction network was designed for the ARM platform based on Residual Neural Network (ResNet). Finally, according to the characteristics of the ARM platform, the Mali GPU was used to accelerate the face feature extraction network, sharing the CPU load and improving the overall running efficiency of the system. The algorithm was deployed on an ARM-based Rockchip development board, where it ran at 22 frames per second. Experimental results show that the recognition rate of this method on MegaFace is 11 percentage points higher than that of MobileFaceNet.
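As an illustrative aside (not taken from the paper), the sketch below shows only the final matching step one might place after such a pipeline: comparing a probe embedding against a small gallery with cosine similarity. The 128-dimensional embeddings, the `identify` helper and the 0.5 threshold are assumptions for the example, not the paper's settings.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(probe, gallery, threshold=0.5):
    """Return the gallery identity most similar to the probe embedding,
    or None if the best similarity stays below the threshold."""
    best_name, best_score = None, -1.0
    for name, emb in gallery.items():
        score = cosine_similarity(probe, emb)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy usage with random 128-D vectors standing in for ResNet face embeddings.
rng = np.random.default_rng(0)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = gallery["alice"] + 0.05 * rng.normal(size=128)
print(identify(probe, gallery))
```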
Human skeleton key point detection method based on OpenPose-slim model
WANG Jianbing, LI Jun
Journal of Computer Applications    2019, 39 (12): 3503-3509.   DOI: 10.11772/j.issn.1001-9081.2019050954
The OpenPose model for human skeleton key point detection greatly shortens the detection cycle while maintaining accuracy comparable to the Regional Multi-Person Pose Estimation (RMPE) model and the Mask Region-based Convolutional Neural Network (Mask R-CNN) model, both proposed in 2017 with near-optimal detection performance at that time. However, the OpenPose model suffers from a low parameter sharing rate, high redundancy, long processing time and an overly large model. To solve these problems, an OpenPose-slim model was proposed, in which the network width was reduced, the number of convolution block layers was decreased, the original parallel structure was changed into a sequential structure, and a Dense connection mechanism was added inside each module. The processing was divided into three modules: 1) the position coordinates of human skeleton key points were detected in the key point localization module; 2) the key points were connected into limbs in the key point association module; 3) limb matching was performed to obtain the human body contour in the limb matching module. The stages are closely correlated. The experimental results on the MPII dataset, the Common Objects in COntext (COCO) dataset and the AI Challenger dataset show that using four localization modules and two association modules, with Dense connections inside each module, is the best structure. Compared with the OpenPose model, the test cycle of the proposed model is shortened to nearly 1/6, the parameter size is reduced by nearly 50%, and the model size is reduced to nearly 1/27.
Modeling and verification approach for temporal properties of self-adaptive software dynamic processes
HAN Deshuai, XING Jianchun, YANG Qiliang, LI Juelong
Journal of Computer Applications    2018, 38 (3): 799-805.   DOI: 10.11772/j.issn.1001-9081.2017081992
Current modeling and verification approaches for self-adaptive software rarely consider temporal properties. However, in time-critical application domains, the correct operation of self-adaptive software depends on both the correctness of the self-adaptive logic and the temporal properties of its dynamic processes. To this end, temporal properties of self-adaptive software were explicitly defined, such as monitoring period, delay trigger time, deadline of the self-adaptive process, self-adaptive adjusting time and self-adaptive steady time. Then, Timed Automata Network (TAN) based modeling templates for the temporal properties of self-adaptive software dynamic processes were constructed. Finally, the temporal properties were formally described with Timed Computation Tree Logic (TCTL), and then analyzed and verified. The proposed approach was validated on a self-adaptive example. The results show that it can explicitly depict the temporal properties of self-adaptive software and reduce the complexity of its formal modeling.
Design of indoor mobile fire-extinguishing robot system based on wireless sensor network
SHI Bing, DUAN Suolin, LI Ju, WANG Peng, ZHU Yifei
Journal of Computer Applications    2018, 38 (1): 284-289.   DOI: 10.11772/j.issn.1001-9081.2017071757
Aiming at the problems that an indoor mobile fire-extinguishing robot cannot obtain comprehensive environmental information in time through its on-board sensors alone and lacks a remote network control function, a system architecture with remote network control based on Wireless Sensor Network (WSN) was proposed. Firstly, a WSN with mesh topology was built to collect indoor environmental information. Secondly, after analyzing the logic of the system components, a database and a Web server were implemented to provide browsing for remote clients. Finally, remote network control of the robot was achieved by developing client software with Socket communication. The test results show that the packet loss rate of the mesh topology without covering the gateway node is 2% at a 1.5 s sending interval, which is 67% lower than that of the tree topology under the same conditions. With the proposed architecture, more comprehensive indoor environmental information is obtained, the WSN packet loss rate is reduced, and remote network control is realized.
Dynamic chaotic ant colony system and its application in robot path planning
LI Juan, YOU Xiaoming, LIU Sheng, CHEN Jia
Journal of Computer Applications    2018, 38 (1): 126-131.   DOI: 10.11772/j.issn.1001-9081.2017061326
To address the population diversity and convergence speed problems when an Ant Colony System (ACS) is applied to robot path planning, a dynamic chaos operator was introduced into the ACS to balance the two. The core of the dynamic chaotic ACS is that a Logistic chaotic operator is added to the traditional ACS to increase population diversity and improve solution quality. First, the chaotic operator was applied in the early iterations to adjust the global pheromone values on the paths, increasing the population diversity of the algorithm and preventing it from falling into local optima. Then, in the later stage, the plain ACS was used to guarantee convergence speed. The experimental results show that, for robot path planning, the dynamic chaotic ACS has better population diversity than the ACS, with higher solution quality and faster convergence. Compared with the Elitist Ant System (EAS) and the rank-based Ant System (ASrank), the dynamic chaotic ACS balances solution quality against convergence speed, finds better solutions even in complex obstacle environments, and improves the efficiency of mobile robot path planning.
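As a rough sketch of the chaotic perturbation idea (assumptions: Logistic-map parameter μ = 4, a perturbation weight of 0.1, and application directly to a raw pheromone matrix — none of these values come from the paper):

```python
import numpy as np

def logistic_sequence(x0, n, mu=4.0):
    """Logistic-map chaotic sequence x_{k+1} = mu * x_k * (1 - x_k)."""
    seq, x = np.empty(n), x0
    for k in range(n):
        x = mu * x * (1.0 - x)
        seq[k] = x
    return seq

def perturb_pheromone(tau, x0=0.3, weight=0.1):
    """Blend a chaotic perturbation into the global pheromone matrix to raise
    population diversity in early iterations (later iterations would use plain ACS)."""
    chaos = logistic_sequence(x0, tau.size).reshape(tau.shape)
    return (1.0 - weight) * tau + weight * chaos * tau.mean()

tau = np.ones((10, 10))            # pheromone on a 10-node graph
print(perturb_pheromone(tau).round(3))
```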
Self-training method based on semi-supervised clustering and data editing
LYU Jia, LI Junnan
Journal of Computer Applications    2018, 38 (1): 110-115.   DOI: 10.11772/j.issn.1001-9081.2017071721
To address the problems that the high-confidence unlabeled samples selected by the self-training method carry little information in each iteration and that the method easily mislabels unlabeled samples, a Naive Bayes self-training method based on semi-supervised clustering and data editing was proposed. Firstly, semi-supervised clustering was applied to the small number of labeled samples and the large number of unlabeled samples, the unlabeled samples with high membership were chosen, and they were then classified by Naive Bayes. Secondly, a data editing technique was used to filter out those high-membership unlabeled samples that were misclassified by Naive Bayes. The data editing technique filters noise by exploiting information from both labeled and unlabeled samples, which overcomes the performance degradation of traditional data editing caused by the lack of labeled samples. The effectiveness of the proposed algorithm was verified by comparative experiments on UCI datasets.
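A fragmentary sketch of one selection-and-filtering round under simplifying assumptions: seeded k-means stands in for the semi-supervised clustering, distance to the cluster centre stands in for membership, and agreement between the cluster label and a Gaussian Naive Bayes prediction stands in for the data-editing filter; none of these concrete choices are prescribed by the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

def self_training_round(X_l, y_l, X_u, top_frac=0.2):
    """One round: cluster with class-seeded centers, pick confidently clustered
    unlabeled points, keep only those whose NB label agrees with the cluster label.
    Assumes class labels are 0..k-1 so cluster indices follow the seeded order."""
    k = len(np.unique(y_l))
    centers = np.vstack([X_l[y_l == c].mean(axis=0) for c in range(k)])
    km = KMeans(n_clusters=k, init=centers, n_init=1).fit(np.vstack([X_l, X_u]))
    u_labels = km.labels_[len(X_l):]
    dist = np.linalg.norm(X_u - km.cluster_centers_[u_labels], axis=1)
    idx = np.argsort(dist)[: max(1, int(top_frac * len(X_u)))]   # high "membership"
    nb_pred = GaussianNB().fit(X_l, y_l).predict(X_u[idx])
    agree = nb_pred == u_labels[idx]                             # data-editing style filter
    return np.vstack([X_l, X_u[idx[agree]]]), np.concatenate([y_l, nb_pred[agree]])

rng = np.random.default_rng(1)
X_l = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(5, 1, (5, 2))])
y_l = np.array([0] * 5 + [1] * 5)
X_u = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
X_new, y_new = self_training_round(X_l, y_l, X_u)
print(X_new.shape, y_new.shape)
```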
Image denoising model with adaptive non-local data-fidelity term and bilateral total variation
GUO Li, LIAO Yu, LI Min, YUAN Hailin, LI Jun
Journal of Computer Applications    2017, 37 (8): 2334-2342.   DOI: 10.11772/j.issn.1001-9081.2017.08.2334
Aiming at the over-smoothing, residual noise around singular structures, contrast loss and staircase effect of common denoising methods, an image denoising model with an adaptive non-local data-fidelity term and bilateral total variation regularization was proposed, providing an adaptive non-local regularization energy function and the corresponding variational framework. Firstly, the data-fidelity term was obtained by a non-local means filter with adaptive weighting. Secondly, bilateral total variation regularization was introduced into this framework, with a regularization factor balancing the data-fidelity term and the regularization term. Finally, optimal solutions for different noise statistics were obtained by minimizing the energy function, so as to reduce residual noise and correct excessive smoothing. Theoretical analysis and experiments on simulated and real noise images show that the proposed model can handle different noise statistics; compared with the adaptive non-local means filter, its Peak Signal-to-Noise Ratio (PSNR) is increased by up to 0.6 dB; compared with the total variation regularization algorithm, its subjective visual quality is obviously improved, image texture and edge details are well preserved during denoising, the PSNR is increased by up to 10 dB, and the Multi-Scale Structural Similarity index (MS-SSIM) is increased by 0.3. Therefore, the proposed model handles both noise and high-frequency image details well, and has practical value in video and image resolution enhancement.
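For orientation only, a schematic energy of this kind can be written as below (LaTeX): $w(x,y)$ are adaptive non-local means weights, $S_{l,m}$ denotes a shift by $(l,m)$, and $\lambda,\alpha$ are balance parameters. This conveys the structure described in the abstract, not the paper's exact functional.

```latex
\hat{u} = \arg\min_{u} \;
\underbrace{\sum_{x}\sum_{y\in\mathcal{N}(x)} w(x,y)\,\bigl(u(x)-f(y)\bigr)^{2}}_{\text{adaptive non-local data fidelity}}
\;+\; \lambda \underbrace{\sum_{x}\sum_{-p\le l,m\le p} \alpha^{|l|+|m|}\,\bigl|u(x)-\bigl(S_{l,m}u\bigr)(x)\bigr|}_{\text{bilateral total variation}}
```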
Image restoration based on natural patch likelihood and sparse prior
LI Junshan, YANG Yawei, ZHU Zijiang, ZHANG Jiao
Journal of Computer Applications    2017, 37 (8): 2319-2323.   DOI: 10.11772/j.issn.1001-9081.2017.08.2319
Concerning the problem that images captured by optical systems suffer degradations including noise, blurring and geometric distortion when the imaging process is affected by defocus, motion, atmospheric disturbance and photoelectric noise, a generic image restoration framework based on natural patch likelihood and a sparse prior was proposed. Firstly, on the basis of the natural image sparse prior model, several patch likelihood models were compared; the results indicate that the image patch likelihood model can improve restoration performance. Secondly, the expected patch log likelihood model was constructed and optimized, which reduced the running time and simplified the learning process. Finally, image restoration based on the optimized expected patch log likelihood and a Gaussian Mixture Model (GMM) was accomplished through an approximate Maximum A Posteriori (MAP) algorithm. The experimental results show that the proposed approach can restore images degraded by various kinds of blur and additive noise, and outperforms state-of-the-art sparse-prior-based restoration methods in both Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM), with better visual quality.
Research on anti-attack control plane in software-defined network based on Byzantine fault-tolerance
GAO Jie, WU Jiangxing, HU Yuxiang, LI Junfei
Journal of Computer Applications    2017, 37 (8): 2281-2286.   DOI: 10.11772/j.issn.1001-9081.2017.08.2281
The centralized control plane of Software-Defined Network (SDN) brings great convenience, but also introduces many security risks. In view of the single point of failure, unknown vulnerabilities and backdoors, static configuration and other security problems of the controller, a secure SDN architecture based on the Byzantine protocol was proposed, in which the Byzantine protocol is executed among controllers, each switching device is controlled by a controller view, and control messages are decided jointly by several controllers. Furthermore, dynamics and heterogeneity were introduced into the architecture, breaking the attack chain and enhancing the network's active defense capability; moreover, based on a quantification of controller heterogeneity, a two-stage algorithm was designed to seek the controller view, ensuring both network availability and the security of the controller view. Simulation results show that the proposed architecture is more resistant to attacks than the traditional one.
Improved ellipse fitting algorithm based on Letts criterion
CAO Junli, LI Jufeng
Journal of Computer Applications    2017, 37 (1): 273-277.   DOI: 10.11772/j.issn.1001-9081.2017.01.0273
The commonly used Least Squares (LS) ellipse fitting algorithm based on minimum algebraic distance is simple and easy to implement, but it makes no selection among the sample points, so erroneous points easily make the fitting result inaccurate. To overcome this shortage of the LS algorithm, an improved ellipse fitting algorithm based on the Letts criterion was proposed. Firstly, an ellipse was fitted to the points on the curve using the LS algorithm based on minimum algebraic distance. Then, the algebraic distances from the points on the curve to the fitted ellipse were taken as the fitting point set; after verifying that this set followed a normal distribution, the points whose distances exceeded |3σ| were identified as outliers and eliminated according to the Letts criterion. These steps were repeated until all points fell within [-3σ, 3σ]. Finally, the best-fitting ellipse was obtained. The simulation results show that the fitting error of the improved algorithm based on the Letts criterion is within 1.0%, and its fitting accuracy is improved by at least 2 percentage points compared with the LS algorithm under the same conditions. The simulation results and a practical application in cigarette roundness measurement verify the effectiveness of the improved algorithm.
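A compact sketch of the iterate-fit-and-reject loop (the normality check described above is omitted, and the conic is fitted as ax² + bxy + cy² + dx + ey = 1 purely for illustration):

```python
import numpy as np

def fit_conic(x, y):
    """LS fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1; returns the coefficients
    and the algebraic residuals used as 'distances'."""
    A = np.column_stack([x**2, x * y, y**2, x, y])
    coef, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coef, A @ coef - 1.0

def robust_ellipse_fit(x, y, max_iter=20):
    """Refit repeatedly, dropping points whose residual lies outside +/- 3 sigma."""
    keep = np.ones(len(x), dtype=bool)
    for _ in range(max_iter):
        coef, res = fit_conic(x[keep], y[keep])
        inliers = np.abs(res) <= 3.0 * res.std()
        if inliers.all():
            break
        keep[np.flatnonzero(keep)[~inliers]] = False
    return coef, keep

t = np.linspace(0, 2 * np.pi, 200)
x = 4 * np.cos(t) + 0.05 * np.random.randn(t.size)
y = 2 * np.sin(t) + 0.05 * np.random.randn(t.size)
x[:5] += 3.0                                   # a few gross outliers
coef, keep = robust_ellipse_fit(x, y)
print(coef.round(3), int(keep.sum()), "inliers kept")
```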
Sweep coverage optimization algorithm for mobile sensor node with limited sensing
SHEN Xianhao, LI Jun, NAI He
Journal of Computer Applications    2017, 37 (1): 60-64.   DOI: 10.11772/j.issn.1001-9081.2017.01.0060
In applications of mobile Wireless Sensor Network (WSN), since the sensing range of sensor nodes is limited, coverage of a target area becomes a sweep coverage problem. A new sweep coverage algorithm based on multi-objective optimization was proposed, in which a bi-objective optimization strategy was used for path planning of a single mobile sensor node within the target area, maximizing the node's coverage while minimizing the length of the sweep coverage path. Simulation experiments were carried out both with and without obstacles. Compared with the formation coverage algorithm for multiple nodes, the proposed algorithm significantly reduces movement energy consumption while only moderately reducing the coverage rate.
Real-time alert correlation approach based on attack planning graph
ZHANG Jing, LI Xiaopeng, WANG Hengjun, LI Junquan, YU Bin
Journal of Computer Applications    2016, 36 (6): 1538-1543.   DOI: 10.11772/j.issn.1001-9081.2016.06.1538
Alert correlation approaches based on causal relationships cannot process massive alerts in time, and the attack scenario graphs they produce tend to split. To solve these problems, a real-time alert correlation approach based on the Attack Planning Graph (APG) was proposed. Firstly, the definitions of APG and Attack Planning Tree (APT) were presented. A real-time alert correlation algorithm based on APG was then proposed, building the APG model from prior knowledge to reconstruct attack scenarios. Finally, an alert inference mechanism was applied to complete the attack scenarios and predict attacks. The experimental results show that the proposed approach processes massive alerts and rebuilds attack scenarios effectively, with better real-time performance, and can be applied to analyzing intrusion intentions and guiding intrusion responses.
Particle swarm optimization algorithm based on multi-strategy synergy
LI Jun, WANG Chong, LI Bo, FANG Guokang
Journal of Computer Applications    2016, 36 (3): 681-686.   DOI: 10.11772/j.issn.1001-9081.2016.03.681
Aiming at the shortcomings that Particle Swarm Optimization (PSO) easily falls into local optima and has low precision in the later stage of evolution, a modified Multi-Strategy synergy PSO (MSPSO) algorithm was proposed. Firstly, a probability threshold of 0.3 was set: in every iteration, if the randomly generated probability value was less than the threshold, opposition-based learning was applied to the best individual to generate its opposite solution, improving the convergence speed and precision of PSO; otherwise, a Gaussian mutation strategy was applied to the particle positions to enhance population diversity. Secondly, a Cauchy mutation strategy with a linearly decreasing Cauchy distribution scale parameter was proposed to generate better solutions guiding the particles toward the optimal region. Finally, simulation experiments were conducted on eight benchmark functions. The MSPSO algorithm achieves convergence mean values of 1.68E+01, 2.36E-283, 8.88E-16, 2.78E-05 and 8.88E-16 on Rosenbrock, Schwefel's P2.22, Rotated Ackley, Quadric Noise and Ackley respectively, and converges to the optimal value 0 on Sphere, Griewank and Rastrigin, which is better than GDPSO (PSO based on Gaussian Disturbance) and GOPSO (PSO based on global best Cauchy mutation and Opposition-based learning). The results show that the proposed algorithm has higher convergence accuracy and can effectively avoid being trapped in local optima.
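A fragmentary sketch of the two per-iteration strategies named above, under assumed search bounds [lb, ub]; the Cauchy-mutation schedule and the full PSO velocity update are not reproduced, and the 0.1 mutation width is an illustrative value.

```python
import numpy as np

def opposition(x, lb, ub):
    """Opposition-based learning: the opposite point of x within [lb, ub]."""
    return lb + ub - x

def multi_strategy_step(positions, gbest, fitness, lb, ub, p_obl=0.3, sigma=0.1, rng=None):
    """With probability p_obl try the opposite of the global best (keep it if better,
    minimization assumed); otherwise apply Gaussian mutation to the particle positions."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < p_obl:
        candidate = opposition(gbest, lb, ub)
        if fitness(candidate) < fitness(gbest):
            gbest = candidate
    else:
        positions = np.clip(positions + rng.normal(0.0, sigma, positions.shape), lb, ub)
    return positions, gbest

sphere = lambda x: float(np.sum(x ** 2))       # toy objective
rng = np.random.default_rng(2)
pos = rng.uniform(-5, 5, (20, 10))
gbest = pos[np.argmin([sphere(p) for p in pos])]
pos, gbest = multi_strategy_step(pos, gbest, sphere, -5.0, 5.0, rng=rng)
print(round(sphere(gbest), 4))
```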
Semi-supervised extreme learning machine and its application in analysis of near-infrared spectroscopy data
JING Shibo, YANG Liming, LI Junhui, ZHANG Siyun
Journal of Computer Applications    2016, 36 (2): 387-391.   DOI: 10.11772/j.issn.1001-9081.2016.02.0387
When insufficient labeled training information is available, supervised Extreme Learning Machine (ELM) is difficult to use. Applying semi-supervised learning to ELM, a Semi-Supervised ELM (SSELM) framework was therefore proposed. However, the optimal solution of SSELM is difficult to find because of its nonconvexity and nonsmoothness. Using a combinatorial optimization method, SSELM was solved by reformulating it as a linear mixed-integer program. Furthermore, SSELM was applied to the direct recognition of medicine and seed datasets acquired with Near-InfraRed spectroscopy (NIR) technology. Compared with traditional ELM methods, the experimental results show that SSELM improves generalization when labeled training information is insufficient, which indicates the feasibility and effectiveness of the proposed method.
Adaptive residual error correction support vector regression prediction algorithm based on phase space reconstruction
LI Junshan, TONG Qi, YE Xia, XU Yuan
Journal of Computer Applications    2016, 36 (11): 3229-3233.   DOI: 10.11772/j.issn.1001-9081.2016.11.3229
Focusing on nonlinear time series prediction in analog circuit fault prognosis and the error accumulation problem of traditional Support Vector Regression (SVR) multi-step prediction, a new adaptive SVR prediction algorithm based on phase space reconstruction was proposed. Firstly, the significance of SVR multi-step prediction for time series trend forecasting and the error accumulation it causes were analyzed. Secondly, phase space reconstruction was introduced into SVR prediction: the phase space of the analog circuit state time series was reconstructed, and SVR prediction was then carried out. Thirdly, a second SVR prediction of the accumulated error sequence generated during multi-step prediction was used to adaptively correct the initial prediction errors. Finally, the proposed algorithm was verified by simulation. The simulation results and the experimental results of health-degree prediction for analog circuits show that the proposed algorithm can effectively reduce the error accumulation caused by multi-step prediction, significantly improve the accuracy of regression estimation, and better predict the trend of analog circuit state changes.
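A brief sketch of two reusable ingredients only — time-delay (phase space) embedding and one-step SVR prediction with scikit-learn; the embedding dimension, delay and SVR hyperparameters are illustrative, and the adaptive residual-correction stage is not reproduced.

```python
import numpy as np
from sklearn.svm import SVR

def delay_embed(series, dim=4, tau=2):
    """Phase-space reconstruction: row i is [x(i), x(i+tau), ..., x(i+(dim-1)*tau)]
    and the target is the next value x(i+(dim-1)*tau+1)."""
    n = len(series) - (dim - 1) * tau - 1
    X = np.column_stack([series[j * tau : j * tau + n] for j in range(dim)])
    y = series[(dim - 1) * tau + 1 : (dim - 1) * tau + 1 + n]
    return X, y

t = np.linspace(0, 20 * np.pi, 2000)
series = np.sin(t) + 0.05 * np.random.randn(t.size)   # stand-in for a circuit-state series
X, y = delay_embed(series)
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:-200], y[:-200])
pred = model.predict(X[-200:])
print("test RMSE:", round(float(np.sqrt(np.mean((pred - y[-200:]) ** 2))), 4))
```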
Stock market volatility forecast based on calculation of characteristic hysteresis
YAO Hongliang, LI Daguang, LI Junzhao
Journal of Computer Applications    2015, 35 (7): 2077-2082.   DOI: 10.11772/j.issn.1001-9081.2015.07.2077
Focusing on the issue that inflection points in stock price volatility are hard to forecast, which degrades forecast accuracy, a Lag Risk Degree Threshold Generalized Autoregressive Conditional Heteroscedasticity in Mean (LRD-TGARCH-M) model was proposed. Firstly, hysteresis was defined based on the inconsistency between stock price volatility and index volatility, and a Lag Degree (LD) calculation model was proposed based on the energy volatility of the stock. The LD was then used to measure risk and was put into the mean equation of the share price, to overcome the deficiency of the Threshold Generalized Autoregressive Conditional Heteroscedasticity in Mean (TGARCH-M) model in predicting inflection points. Next, considering the drastic volatility near inflection points, the LD was also put into the variance equation to optimize the variance dynamics and improve forecast accuracy. Finally, the volatility forecasting formulas and an accuracy analysis of the LRD-TGARCH-M model were given. The experimental results on Shanghai Stock data show that the forecast accuracy increases by 3.76% compared with the TGARCH-M model and by 3.44% compared with the Exponential Generalized Autoregressive Conditional Heteroscedasticity in Mean (EGARCH-M) model, which proves that the LRD-TGARCH-M model can reduce the errors in price volatility forecasting.
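For orientation, a textbook TGARCH-M(1,1) specification is written out below in LaTeX, with the lag-degree term LD indicated schematically in both equations as the abstract describes; the coefficients $\varphi,\psi$ and their exact placement are illustrative, not the paper's formulation.

```latex
r_t = \mu + \lambda \sigma_t^{2} + \varphi\,\mathrm{LD}_t + \varepsilon_t, \qquad
\varepsilon_t = \sigma_t z_t,\; z_t \sim \mathcal{N}(0,1)
\\[4pt]
\sigma_t^{2} = \omega + \alpha\,\varepsilon_{t-1}^{2}
             + \gamma\,\varepsilon_{t-1}^{2}\,\mathbf{1}\{\varepsilon_{t-1}<0\}
             + \beta\,\sigma_{t-1}^{2} + \psi\,\mathrm{LD}_t
```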
Object detection based on visual saliency map and objectness
LI Junhao, LIU Zhi
Journal of Computer Applications    2015, 35 (12): 3560-3564.   DOI: 10.11772/j.issn.1001-9081.2015.12.3560
A salient object detection approach based on the visual saliency map and objectness was proposed. For each input image, a number of bounding boxes with high objectness scores were exploited to estimate the rough object location, and the bounding-box-level objectness scores were transferred to pixel level to weight the input saliency map. The input saliency map and the weighted saliency map were adaptively binarized, and the convex hull algorithm was used to obtain the maximum search region and the seed region, respectively. Finally, a globally optimal solution was obtained by combining the edge density with the search region and the seed region. The experimental results on the public MSRA-B dataset with 5000 images show that the proposed approach outperforms the maximum saliency region method, the region diversity maximization method and the objectness detection method in terms of precision, recall and F-measure.
Parameters design and optimization of crosstalk cancellation system for two loudspeaker configuration
XU Chunlei, LI Junfeng, QIU Yuan, XIA Risheng, YAN Yonghong
Journal of Computer Applications    2014, 34 (5): 1503-1506.   DOI: 10.11772/j.issn.1001-9081.2014.05.1503
In three-dimensional sound reproduction with two loudspeakers, the performance optimization of a Crosstalk Cancellation System (CCS) often considers factors such as inverse filter design parameters and loudspeaker configuration only in isolation. A frequency-domain Least-Squares (LS) approximation was proposed for performance optimization, and the relationship between these factors and their effect on CCS performance was evaluated systematically. To achieve a trade-off between the computational efficiency and the performance of the crosstalk cancellation algorithm, the method derives optimized parameters. The crosstalk cancellation effect was evaluated with the Channel Separation (CS) and Performance Error (PE) indices, and the simulation results indicate that the obtained parameters yield good crosstalk cancellation.
Enhanced distributed mobility management based on host identity protocol
JIA Lei, WANG Lingjiao, GUO Hua, XU Yawei, LI Juan
Journal of Computer Applications    2014, 34 (2): 341-345.  
The Host Identity Protocol (HIP) macro-mobility management was introduced into the Distributed Mobility Management (DMM) architecture, with the Rendezvous Server (RVS) co-located with the DMM mobility access routing function in the Distributed Access Gateway (D-GW). By extending the HIP packet header parameters, the HIP base exchange (BEX) messages carried the host identifier tuple (HIT, IP address) to the newly registered D-GW, and the new D-GW forwarded the IP address using the binding message. Through the established tunnel, data cached in the previous D-GW could later be forwarded to the new D-GW. A handover mechanism was proposed to effectively ensure data integrity, and the simulation results show that the method can effectively reduce the total signaling overhead; furthermore, the security of HIP-based mobility management is guaranteed.
Short-term electricity load forecasting based on complementary ensemble empirical mode decomposition-fuzzy entropy and echo state network
LI Qing, LI Jun, MA Hao
Journal of Computer Applications    2014, 34 (12): 3651-3655.  
To improve the precision of short-term power load forecasting, a combined forecasting method based on Complementary Ensemble Empirical Mode Decomposition (CEEMD)-fuzzy entropy and Echo State Network with Leaky-integrator neurons (LiESN) was proposed. Firstly, in order to reduce the computational scale of the component-wise analysis of the power load series and improve forecasting accuracy, the load time series was decomposed by CEEMD-fuzzy entropy into a set of subsequences with clearly different complexity. Then, according to the characteristics of each subsequence, a corresponding LiESN forecasting submodel was built, and the final forecast was obtained by superposing the submodel outputs. The CEEMD-LiESN method was applied to short-term load forecasting for the New England region. The experimental results show that the proposed combined forecasting method has high prediction precision.
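As background on the LiESN submodels, a standard leaky-integrator echo state network update is shown below in LaTeX, with leaking rate $\alpha$, input weights $\mathbf{W}^{\mathrm{in}}$, reservoir weights $\mathbf{W}$ and a readout $\mathbf{W}^{\mathrm{out}}$ trained by linear regression; the paper's reservoir sizes and training details are not reproduced.

```latex
\mathbf{x}(n) = (1-\alpha)\,\mathbf{x}(n-1) + \alpha\,\tanh\!\bigl(\mathbf{W}^{\mathrm{in}}\mathbf{u}(n) + \mathbf{W}\,\mathbf{x}(n-1)\bigr),
\qquad \hat{y}(n) = \mathbf{W}^{\mathrm{out}}\,[\mathbf{u}(n);\,\mathbf{x}(n)]
```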
Single video temporal super-resolution reconstruction algorithm based on maximum a posteriori
GUO Li, LIAO Yu, CHEN Weilong, LIAO Honghua, LI Jun, XIANG Jun
Journal of Computer Applications    2014, 34 (12): 3580-3584.  
Any video camera has a finite temporal resolution, which causes motion blur and motion aliasing in the captured video sequence. Spatial deblurring and temporal interpolation are usually adopted to deal with this problem, but they cannot solve it at the source. A temporal super-resolution reconstruction method for a single video based on Maximum A Posteriori (MAP) estimation was proposed. The conditional probability model was determined by the reconstruction constraint, and the prior model was established from the temporal self-similarity within the video itself. From these two models, the MAP estimate was obtained, that is, a video with high temporal resolution was reconstructed from a single video with low temporal resolution, so as to effectively remove motion blur caused by overlong exposure time and motion aliasing caused by an insufficient camera frame rate. Theoretical analysis and experiments show that the proposed method is effective and efficient.
Single image defogging algorithm based on HSI color space
WANG Jianxin, ZHANG Youhui, WANG Zhiwei, ZHANG Jing, LI Juan
Journal of Computer Applications    2014, 34 (10): 2990-2995.   DOI: 10.11772/j.issn.1001-9081.2014.10.2990
Images captured in hazy weather suffer from poor contrast and low visibility. A single image defogging algorithm combining the characteristics of the HSI color space was proposed to remove haze. Firstly, the original image was converted from the RGB color space to the HSI color space. Then, a defogging model was established according to the different effects of haze on hue, saturation and intensity. Finally, the range of the weight in the saturation model was obtained by analyzing the saturation of the original image, the range of the weight in the intensity model was estimated accordingly, and the original image was defogged. Compared with other algorithms, the experimental results show that the running efficiency of the proposed method is doubled and that it effectively enhances clarity, so it is suitable for single image defogging.
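A small helper showing the standard RGB-to-HSI conversion the method starts from (vectorized with NumPy); the defogging models and their weights are not reproduced here.

```python
import numpy as np

def rgb_to_hsi(img):
    """Convert an RGB image (float array in [0, 1], shape HxWx3) to HSI.
    H is returned in radians, S and I in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-8
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b <= g, theta, 2.0 * np.pi - theta)
    return np.dstack([hue, saturation, intensity])

hazy = np.random.rand(4, 4, 3)                 # stand-in for a hazy image
print(rgb_to_hsi(hazy).shape)
```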
Fuzzy rule extraction based on genetic algorithm
GUO Yiwen, LI Jun, GENG Linxiao
Journal of Computer Applications    2014, 34 (10): 2899-2903.   DOI: 10.11772/j.issn.1001-9081.2014.10.2899
To avoid the limitations of traditional fuzzy rules based on Genetic Algorithm (GA), a calculation method for fuzzy control rules containing weight coefficients was presented, in which GA is used to find the best weight coefficients for computing the fuzzy rules. In this method, different weight coefficients can be assigned to different input levels, and the correlation and symmetry of the weight coefficients can be used to assess all the fuzzy rules and reduce the influence of invalid rules. The performance comparison experiments show that a system built from these fuzzy rules has small overshoot and short settling time, and is practical for fuzzy control applications. Experiments with different stimulus signals show that the system does not depend on the stimulus signal, and has good tracking performance and strong robustness.
Passenger route choice behavior on transit network with real-time information at stops
ZENG Ying, LI Jun, ZHU Hui
Journal of Computer Applications    2013, 33 (10): 2964-2968.  
With the development of intelligent transportation information systems, intelligent public transportation systems are gradually being popularized. Such systems provide transit passengers with various kinds of real-time information on network conditions, thereby affecting passengers' travel choice behavior, improving travel convenience and flexibility, and improving the social benefit and service level of the public transit system. Considering the particularity of the transit network, and taking the electronic bus stop information of Chengdu as an example, a questionnaire was designed to investigate passengers' route choice behavior and travel intention. Using qualitative and quantitative analysis and random utility theory, route choice models were established based on the Logit model and the mixed Logit model, with characteristic variables of the alternatives and passengers' personal socio-economic attributes as explanatory variables; Monte Carlo simulation and maximum likelihood estimation were used to estimate the parameters. The results indicate that the differences in route choice behavior resulting from individual preferences can be reasonably interpreted by the mixed Logit model, which helps in better understanding the complexity of transit behavior and guiding applications.
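For reference, the multinomial logit choice probability underlying both models is shown below (LaTeX); the mixed logit additionally integrates this probability over a distribution $f(\beta)$ of random coefficients, which is why Monte Carlo simulation is needed for estimation.

```latex
P_{ni} = \frac{\exp(V_{ni})}{\sum_{j \in C_n} \exp(V_{nj})}, \qquad
P^{\mathrm{mixed}}_{ni} = \int \frac{\exp\bigl(V_{ni}(\beta)\bigr)}{\sum_{j \in C_n} \exp\bigl(V_{nj}(\beta)\bigr)}\, f(\beta)\,\mathrm{d}\beta
```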
Algorithm for modulation recognition based on cumulants in Rayleigh channel
ZHU Hongbo, ZHANG Tianqi, WANG Zhichao, LI Junwei
Journal of Computer Applications    2013, 33 (10): 2765-2768.  
Concerning the problem of modulation identification in the Rayleigh channel, a new algorithm based on cumulants was proposed. The method is efficient and can easily classify seven kinds of signals, namely BPSK (Binary Phase Shift Keying), QPSK (Quadrature Phase Shift Keying), 4ASK (4-ary Amplitude Shift Keying), 16QAM (16-ary Quadrature Amplitude Modulation), 32QAM (32-ary Quadrature Amplitude Modulation), 64QAM (64-ary Quadrature Amplitude Modulation) and OFDM (Orthogonal Frequency Division Multiplexing), by using a decision tree classifier with feature parameters extracted from combinations of fourth-order and sixth-order cumulants. Theoretical derivation and analysis show that the algorithm is insensitive to Rayleigh fading and Additive White Gaussian Noise (AWGN). The computer simulation results show that the success rate exceeds 90% when the Signal-to-Noise Ratio (SNR) is higher than 4 dB in the Rayleigh channel, which demonstrates the feasibility and effectiveness of the proposed algorithm.
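A minimal sketch of estimating the fourth-order cumulants commonly used as such features from complex baseband samples; the sixth-order cumulants and the decision-tree classifier are not shown, and the QPSK-plus-noise input is only a toy example.

```python
import numpy as np

def fourth_order_cumulants(x):
    """Sample estimates of C40, C41, C42 for a zero-mean complex sequence x,
    using the moments M_pq = E[x^(p-q) * conj(x)^q]."""
    m20 = np.mean(x * x)
    m21 = np.mean(x * np.conj(x))
    m40 = np.mean(x ** 4)
    m41 = np.mean(x ** 3 * np.conj(x))
    m42 = np.mean(x ** 2 * np.conj(x) ** 2)
    c40 = m40 - 3 * m20 ** 2
    c41 = m41 - 3 * m20 * m21
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2
    return c40, c41, c42

rng = np.random.default_rng(3)
re = 2 * rng.integers(0, 2, 10000) - 1
im = 2 * rng.integers(0, 2, 10000) - 1
x = (re + 1j * im) / np.sqrt(2)                        # unit-power QPSK symbols
x = x + 0.1 * (rng.normal(size=10000) + 1j * rng.normal(size=10000))
print([np.round(c, 3) for c in fourth_order_cumulants(x)])
```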
Estimation method of the figure of merit in ultra-wideband radio channel
LI Juan, ZHANG Hao, CUI Xuerong, WU Chunlei
Journal of Computer Applications    2013, 33 (10): 2746-2749.  
UWB (UltraWideBand) technology is considered to be the most suitable for indoor wireless location and IEEE802.15.4a is the first radio ranging and positioning physical layer IEEE standard. In order to let the sender know the quality of the ranging, FoM (Figure of Merit) is added in this protocol, but how to produce FoM is not given. On the basis of analyzing the statistical characteristics of the received signal energy block, a method based on the joint parameters of skewness and the maximum slope was proposed to estimate the FoM in UWB radio channel. The simulation finds that this method can provide reference for accurate ranging and positioning, and can improve the ranging accuracy about 30% in the CM1(Channel Model) channel.
Overview of complex event processing technology and its application in logistics Internet of Things
JING Xin, ZHANG Jing, LI Junhuai
Journal of Computer Applications    2013, 33 (07): 2026-2030.   DOI: 10.11772/j.issn.1001-9081.2013.07.2026
Complex Event Processing (CEP) is an advanced analytical technology that processes high-velocity event streams in real time and is primarily applied in Event-Driven Architecture (EDA) systems; it is helpful for realizing intelligent business in many applications. To report its research status, the basic meaning and salient features of CEP were introduced, and a system architecture model composed of nine parts was proposed. The main constituents of the model were then reviewed in terms of key technologies and their formalization. To illustrate how to use CEP in the logistics Internet of Things, an application framework with CEP infiltrated into it was also proposed. It can be concluded that CEP has many merits and can play an important role in these application fields. Finally, the shortcomings of this research domain were pointed out and future work was discussed. The CEP technology was systematically analyzed in terms of theory and practice so as to promote its further development.
Curvature estimation for scattered point cloud data
ZHANG Fan, KANG Baosheng, ZHAO Jiandong, LI Juan
Journal of Computer Applications    2013, 33 (06): 1662-1681.   DOI: 10.3724/SP.J.1087.2013.01662
To resolve the problem of curvature calculation for scattered point cloud data with strong noise, a robust statistical approach to curvature estimation was presented. Firstly, the local shape at a sample point in 3D space was fitted by a quadratic surface; the fitting was performed multiple times with randomly sampled subsets of points, and the best fitting result was selected by a variable-bandwidth maximum kernel density estimator. Finally, the sample point was projected onto the best-fitted surface and the curvatures of the projected point were estimated. The experimental results demonstrate that the proposed method is robust to noise and outliers; in particular, as the noise variance increases, the proposed method is significantly better than the traditional parabolic fitting method.
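A simplified sketch of the final step only: fitting z = ax² + bxy + cy² + dx + ey + f to a local neighborhood by least squares and reading off Gaussian and mean curvature at the query point; the repeated random-subset fitting and kernel-density selection of the best fit are omitted.

```python
import numpy as np

def quad_fit_curvature(neigh):
    """Fit z = a x^2 + b x y + c y^2 + d x + e y + f to a neighborhood centered on
    the query point and return (Gaussian, mean) curvature there."""
    x, y, z = neigh[:, 0], neigh[:, 1], neigh[:, 2]
    A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    (a, b, c, d, e, _), *_ = np.linalg.lstsq(A, z, rcond=None)
    fx, fy, fxx, fxy, fyy = d, e, 2 * a, b, 2 * c
    w = 1.0 + fx**2 + fy**2
    K = (fxx * fyy - fxy**2) / w**2
    H = ((1 + fy**2) * fxx - 2 * fx * fy * fxy + (1 + fx**2) * fyy) / (2 * w**1.5)
    return K, H

# Toy neighborhood from a sphere of radius 2 (expected K = 0.25, |H| = 0.5).
rng = np.random.default_rng(4)
xy = rng.uniform(-0.3, 0.3, (200, 2))
z = 2.0 - np.sqrt(4.0 - xy[:, 0] ** 2 - xy[:, 1] ** 2)
print([round(float(v), 3) for v in quad_fit_curvature(np.column_stack([xy, z]))])
```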
Particle swarm optimization algorithm with fast convergence and adaptive escape
SHI Xiaolu, SUN Hui, LI Jun, ZHU Degang
Journal of Computer Applications    2013, 33 (05): 1308-1312.   DOI: 10.3724/SP.J.1087.2013.01308
To overcome the drawbacks that Particle Swarm Optimization (PSO) converges slowly in the later stage and easily falls into local minima, a new PSO algorithm with fast convergence and adaptive escape (FAPSO), inspired by the Artificial Bee Colony (ABC) algorithm, was proposed. For each particle, FAPSO conducts two search operations: one is global search and the other is local search. When a particle gets stuck, an adaptive escape operator is used to search around that particle again. Experiments were conducted on eight classical benchmark functions. The simulation results demonstrate that the proposed approach improves the convergence rate and solution accuracy compared with some recently proposed PSO variants such as CLPSO, and the results of the t-test show its clear superiority.
Transit assignment based on stochastic user equilibrium with passengers' perception consideration
ZENG Ying, LI Jun, ZHU Hui
Journal of Computer Applications    2013, 33 (04): 1149-1152.   DOI: 10.3724/SP.J.1087.2013.01149
Concerning the special nature of the transit network, a generalized path concept that can conveniently describe passenger route choice behavior was put forward, considering the key cost of each path. Based on the analytical framework of cumulative prospect theory and passengers' perception, a stochastic user equilibrium assignment model was developed. A simple example shows that the proposed method effectively overcomes the limitations of the traditional approach and improves on the assumption of complete rationality in the traditional model. It helps enhance understanding of the complexity of urban public transportation behavior and the rules of decision-making. The result can support facility layout and planning of public transportation as well as evaluation of the level of service, and can also serve as valid data support for traffic guidance.